[new opmath 1] Toggle __use_new_opmath #5269
Merged

Conversation
Hello. You may have forgotten to update the changelog!
Qottmann force-pushed the enable_new_opmath branch from 82f6dbd to fc0eb52 on February 27, 2024 at 17:36
lillian542 approved these changes on Mar 25, 2024
albi3ro approved these changes on Mar 25, 2024
🎉
mlxd added a commit that referenced this pull request on Mar 27, 2024:
### Before submitting

Please complete the following checklist when submitting a PR:

- [x] All new features must include a unit test. If you've fixed a bug or added code that should be tested, add a test to the test directory!
- [x] All new functions and code must be clearly commented and documented. If you do make documentation changes, make sure that the docs build and render correctly by running `make docs`.
- [x] Ensure that the test suite passes, by running `make test`.
- [x] Add a new entry to the `doc/releases/changelog-dev.md` file, summarizing the change, and including a link back to the PR.
- [x] The PennyLane source code conforms to [PEP8 standards](https://www.python.org/dev/peps/pep-0008/). We check all of our code against [Pylint](https://www.pylint.org/). To lint modified files, simply `pip install pylint`, and then run `pylint pennylane/path/to/file.py`.

When all the above are checked, delete everything above the dashed line and fill in the pull request template.

---

**Context:** When Torch has a GPU-backed data buffer, failures can occur when attempting to make autoray-dispatched calls to a Torch method with paired CPU data. In this case, with probabilities on the GPU and eigenvalues on the host (read from the observables), failures appeared with `qml.dot`, and can be reproduced from:

```python
import pennylane as qml
import torch
import numpy as np

torch_device = "cuda"
dev = qml.device("default.qubit.torch", wires=2, torch_device=torch_device)
ham = qml.Hamiltonian(
    torch.tensor([0.1, 0.2], requires_grad=True),
    [qml.PauliX(0), qml.PauliZ(1)],
)

@qml.qnode(dev, diff_method="backprop", interface="torch")
def circuit():
    qml.RX(np.zeros(5), 0)  # Broadcast the state by applying a broadcasted identity
    return qml.expval(ham)

res = circuit()
assert qml.math.allclose(res, 0.2)
```

This PR modifies the registered `coerce` method for Torch to automatically migrate mixed CPU-GPU data, always favouring the associated GPU. In addition, this method now also catches multi-GPU data, where tensors do not reside on the same device index, and fails outright. As a longer-term solution, moving the Torch GPU dispatch calls earlier in the stack would be more sound, but this fixes the aforementioned issue, at the expense of always migrating from CPU to GPU.

**Description of the Change:** As above.

**Benefits:** Allows automatic data migration from host to device when using a GPU-backed tensor. In addition, catches multi-GPU tensor data when using Torch and fails due to non-local representations.

**Possible Drawbacks:** Auto-migration may not always be wanted. The alternative solution is to always be explicit about locality and move the eigenvalue data onto the device at a higher layer in the stack.

**Related GitHub Issues:** #5269 introduced changes that resulted in GPU errors.
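For readers unfamiliar with the coercion behaviour described above, here is a minimal, hypothetical sketch of that kind of logic. It is not the actual PennyLane implementation; the function name `coerce_torch` is invented for illustration, and only the two behaviours named in the commit message are modelled: CPU tensors are migrated onto the GPU used by the other tensors, and tensors spread across different GPU indices raise an error.

```python
import torch

def coerce_torch(tensors):
    """Hypothetical sketch: coerce mixed CPU/GPU torch tensors onto one device.

    Mirrors the behaviour described above: if any tensor lives on a GPU,
    CPU tensors are migrated to that GPU; tensors spread across multiple
    GPU indices fail outright.
    """
    gpu_devices = {t.device for t in tensors if t.device.type == "cuda"}
    if len(gpu_devices) > 1:
        # Non-local representation across GPUs: fail rather than guess.
        raise RuntimeError(
            f"Tensors reside on multiple GPUs ({gpu_devices}); "
            "cannot coerce to a single device."
        )
    if gpu_devices:
        target = gpu_devices.pop()
        return [t.to(target) for t in tensors]  # migrate CPU data to the GPU
    return list(tensors)  # all tensors already on CPU: nothing to do
```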
PR to toggle `__use_new_opmath` and make it the default. This will require several changes in the codebase (and further updates to demos, datasets, and plugins). Bigger changes should be offloaded to separate PRs; small updates can be gathered here. See the sketch after the branching list for what the toggle changes in practice.

Branching: #5269 > #5216 > #5322 > #5335
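As a rough illustration of what the toggle affects, the sketch below shows how operator arithmetic changes between the legacy and new behaviour. It assumes the toggle helpers `qml.operation.enable_new_opmath()`, `qml.operation.disable_new_opmath()`, and `qml.operation.active_new_opmath()` available around the PennyLane version of this PR; exact class names in the comments are indicative only.

```python
import pennylane as qml

# Legacy behaviour: operator arithmetic builds the old classes
# (e.g. Tensor for products, Hamiltonian for sums).
qml.operation.disable_new_opmath()
print(type(qml.PauliX(0) @ qml.PauliZ(1)))  # e.g. <class 'pennylane.operation.Tensor'>

# New behaviour (what this PR makes the default): arithmetic returns
# the new operator classes such as Prod, Sum, and SProd.
qml.operation.enable_new_opmath()
print(type(qml.PauliX(0) @ qml.PauliZ(1)))  # e.g. <class 'pennylane.ops.op_math.Prod'>
print(qml.operation.active_new_opmath())    # True
```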